Systems and methods for spoof detection and liveness analysis, and computer-readable media
Patent abstract:
The present invention relates to spoof detection and liveness analysis performed as a software-based solution on a user device, such as a smartphone, that has a camera, an audio output component (for example, an earpiece speaker or headphones), and an audio input component (for example, a microphone). One or more audio signals are emitted from the audio output component of the user device, reflect off a target, and are received back at the audio input component of the device. Based on the reflections, a determination is made as to whether the target comprises a three-dimensional face-like structure and/or face-like tissue. Using at least this determination, a finding is made as to whether the target is likely to be a spoof rather than a legitimate, living person.
Publication number: BR112017027057B1
Application number: R112017027057-9
Filing date: 2016-05-31
Publication date: 2020-05-12
Inventors: Reza R. Derakhshani; Teply Joel
Applicant: EyeVerify Inc.
Primary IPC class:
Patent description:
Descriptive Report of the Invention Patent for SYSTEMS AND METHODS FOR SPOOF DETECTION AND LIVENESS ANALYSIS, AND COMPUTER-READABLE MEDIA.
Petition 870200030894, dated 3/6/2020.
Cross-Reference to Related Application
[001] This application claims the priority and benefit of US Provisional Patent Application No. 62/180,481, filed June 16, 2015 and entitled Liveness Analysis Using Vitals Detection, which is incorporated herein by reference in its entirety.
Technical Field
[002] The present invention relates generally to image, audio-signal, and vibration-signal analysis and, in particular, to image- and signal-processing techniques for detecting whether a subject depicted in an image is alive.
Background
[003] It is often desirable to restrict access to property or resources to particular individuals. Biometric systems can be used to authenticate the identity of an individual in order to grant or deny access to a resource. For example, iris scanners can be used by a biometric security system to identify an individual based on unique structures in the individual's iris. Such a system can erroneously authorize an impostor, however, if the impostor presents for scanning a pre-recorded image or video of an authorized person's face. Such a fake image or video can be displayed on a monitor, such as a cathode ray tube (CRT) or liquid-crystal display (LCD) screen, on glossy photographs, and so on, held in front of the camera used for scanning. Other spoofing techniques include the use of a photographically accurate three-dimensional mask of a legitimate user's face.
[004] One category of existing anti-spoofing measures focuses mainly on still-image (for example, photograph-based) spoofs. These measures assume that static spoofing attacks do not reproduce the natural movements and variations of different parts of the image, especially of the face.
They also assume that each of the aforementioned movements in live scans occurs on a different scale, in keeping with the natural agility and frequency of the associated muscle groups. However, these measures can only detect static (for example, image-based) spoofing attacks, and they require a certain amount of observation time, at a frame rate high enough to resolve the aforementioned motion vectors into their required speed and frequency profiles, if any. They can also falsely reject live subjects who hold very still during the scan, or falsely accept static reproductions given additional movement, for example, bending and shaking printed fake photos in certain ways.
[005] A second category of existing anti-spoofing measures assumes that a photographic or video reproduction of the biometric sample is not of sufficient quality and that image-texture analysis methods can therefore identify the spoof. However, the assumption of a discernibly low-quality spoof reproduction is unreliable, especially with the advent of the advanced high-definition capture and display technologies that can now be found on modern smartphones and tablets. Not surprisingly, by relying on technology-specific artifacts of the spoof reproduction, these techniques have been shown to be data-dependent and to have subpar generalization capabilities. Another category of anti-spoofing measures, related to the second and based on reference or no-reference image-quality metrics, suffers from the same deficiencies.
Summary
[006] In various implementations described herein, the detection of physical properties indicating the presence of a live person is used to distinguish authentic live faces from images/videos, verifications made under duress, and other spoofed and fraudulent authentication attempts, and/or to identify spoofs, such as by detecting the presence of devices used to reproduce images/videos/other physical reconstructions recorded from the legitimate user in order to fool a biometric system. This is achieved, in part, by (a) detecting spoof signatures and (b) verifying an individual's live, physical presence using three-dimensional face detection and two-factor pulse identification.
[007] Accordingly, in one aspect a computer-implemented method includes the steps of: emitting, using an audio output component of a user device, one or more audio signals; receiving, using an audio input component of the user device, one or more reflections of the audio signals off a target; determining, based on the one or more reflections, whether the target comprises at least one of a face-like structure and face-like tissue; and determining whether the target is a spoof based, at least in part, on the determination of whether the target comprises at least one of a face-like structure and face-like tissue. The user device can be, for example, a mobile device including a smartphone, a tablet, or a laptop. The one or more audio signals can include short coded pulse pings, short-term chirps, or CTFM pings. One or more characteristics of the one or more audio signals can be randomized.
[008] In one implementation, the method further includes the steps of: training a classifier to identify physical features of the target; and providing input to the classifier based on the one or more reflections of the audio signals off the target, in which the determination of whether the target is a spoof is further based, at least in part, on an output of the classifier received in response to the input provided.
[009] In another implementation, the method further includes the steps of: receiving a plurality of images of the target; and determining, based on reflections of light detected in the images, whether the target comprises a three-dimensional face-like structure.
[0010] In a further implementation, the method also includes the steps of: receiving a plurality of images of the target; and identifying, based on the images, whether the target has a first pulse, in which the determination of whether the target is a spoof is further based, at least in part, on the identification of whether the target has a pulse. The first pulse can be identified using remote photoplethysmography.
[0011] In yet another implementation, a second pulse of the target is identified through physical contact with the target, in which the determination of whether the target is a spoof is further based, at least in part, on the measurement of the second pulse. A determination can be made as to whether the second pulse correlates with the first pulse, in which the determination of whether the target is a spoof is further based, at least in part, on the correlation. Measuring the second pulse can include receiving information associated with the second pulse from the user device or another portable or wearable device. The information associated with the second pulse can include a ballistocardiographic signal.
[0012] The details of one or more embodiments of the subject matter described in this specification are set forth in the accompanying drawings and in the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.
Brief Description of the Drawings
[0013] Figures 1A-1C depict several use cases for anti-spoofing and liveness detection.
[0014] Figure 2 depicts a method for anti-spoofing and liveness detection according to an implementation.
[0015] Figure 3 depicts an example of direct and indirect acoustic paths for a sonic probe pulse between an earpiece speaker and a microphone.
[0016] Figures 4 and 5 depict examples of matched-filter demodulated echoes showing reflections off a monitor screen and a real face, respectively.
[0017] Figures 6 and 7 depict reflections off the different facets of a face and off a monitor screen, respectively.
[0018] Like reference numbers and designations in the various drawings indicate like elements.
Detailed Description
[0019] Described herein, in various implementations, are systems and methods for providing multi-stage, software-based anti-spoofing and liveness-detection technology that combines three-dimensional (3D) sonic face detection, using face-modulated sound reflections, with multi-source/multi-path vital-signs detection. As used herein, liveness refers to characteristics that tend to indicate the presence of a living person (as opposed to a spoof or imitation of a living person, such as a pre-recorded image or video of an eye or face, a three-dimensionally modeled head, etc.). Such characteristics can include, for example, recognizable physical properties, such as a face, a pulse, a breathing pattern, and so on. Face refers to features that tend to indicate the presence of a real face, such as real (as opposed to reproduced) eyes, nose, mouth, chin, and/or other facial features and tissue arranged in a recognizable pattern. This definition of a face can be augmented by the inclusion of passive or active sonic, physical, and/or electromagnetic signatures of real faces (as opposed to spoofed faces).
[0020] The present invention provides a new physics-based solution that can be implemented entirely in software and that, among other things, detects spoof reproductions on screens regardless of their quality. It overcomes the shortcomings of existing vision-based anti-spoofing solutions by assessing the likelihood that a real 3D face is being presented to a user device, by examining its sonic (and/or photometric) signature, all in a manner that is transparent to the user. Advantageously, this technique detects spoofs for biometric authentication using only the typical earpieces/sound transducers and microphones of everyday devices, in a variety of everyday environments. The resulting sound signatures, obtained using existing mobile-device hardware, are weak and challenged by multiple confounding factors, which the described methodology overcomes. Sources of the aforementioned low sonic signal-to-noise ratio include unwanted echoes, as well as the nonlinearities and bandwidth limitations of the acoustic path (including those of the transducers), the directionality and sensitivity of the microphone/earpiece, and the reverberations of the device itself. In addition, given the longer wavelengths of the audio bands used, spatial resolution is reduced compared to existing ultrasonic sonar systems, and most of the target reflections are dissipated through dispersion, calling for indirect detection of the embedded sound signatures, as detailed herein.
[0021] In one implementation, the anti-spoofing and liveness-detection technology includes checking for the presence of a three-dimensional face-like structure and measuring the target's pulse using multiple sources.
Three-dimensional face detection can be performed using face-modulated sound reflections (for example, from a high-frequency coded probe signal emitted from an earpiece or other sound transducer, similar to sonar, with reflections of the signals received by a microphone or other audio input) and/or structured-light photometric stereo (for example, from patterned rapid illuminations of a phone screen). Vital-signs detection, such as detecting a user's pulse, can measure the pumping action of the heart, which induces facial color changes and hand/body vibrations. Heartbeat detection can be performed over multiple paths: mechanical body vibrations induced by the heartbeat, known as ballistocardiograms, and pulse detection from the skin color changes registered by a red-green-blue (RGB) camera, known as remote photoplethysmography (rPPG). A user's pulse can also be detected by other mobile/wearable devices with heart-rate sensors.
[0022] Figures 1A-1C illustrate several use cases for the anti-spoofing and liveness-analysis technology described herein. For example, in Figure 1A, a target user 104 uses his mobile device 102 (for example, a smartphone, tablet, etc.) to authenticate using a biometric reading (for example, an eye scan) captured by the mobile device's camera. In addition to the camera, the mobile device 102 can use other sensors, such as an accelerometer, gyroscope, finger heart-rate sensor, vibration sensor, audio output component (for example, a speaker, earpiece, or other sound transducer), audio input component (for example, a microphone), and the like in order to verify the physical presence of the user using the presently described techniques. In Figure 1B, the mobile device 102 captures an image or video of a target on an LCD monitor 106 or other screen.
The software running on the mobile device 102 can determine that the target is not physically present using the present techniques, for example, three-dimensional face detection, evaluation of reflected light and/or audio signals, and pulse detection. Figure 1C depicts a second user 110 holding the mobile device 102 and directing it at the target user 104. In this example, although the physical presence of the target user 104 has been established (for example, by three-dimensional facial structure and visual pulse recognition), a secondary pulse reading taken by the mobile device 102 through physical contact between the device 102 and the second user 110 would not match the visual pulse identified for the target user 104, and the verification of the user's identity would therefore fail.
[0023] Other techniques for anti-spoofing and liveness analysis can be used in conjunction with the technology described herein. Such techniques include those described in US Patent Application No. 14/480,802, filed September 9, 2014 and entitled Systems and Methods for Liveness Analysis, and US Patent Application No. 14/672,629, filed March 30, 2015 and entitled Bio Leash for User Authentication, both of which are incorporated herein by reference in their entirety.
[0024] An implementation of a method for spoof and liveness detection is shown in Figure 2. Starting at STEP 202, a user device (for example, mobile device 102 or another cell phone, smartphone, tablet, virtual-reality device, or other device used in a biometrically enabled user interaction, such as logging into a banking application using biometric eye verification) detects whether a three-dimensional (3D) face-like object is positioned in front of the device and is not a spoof, such as a video pre-recorded on a flat screen.
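The staged flow of Figure 2, elaborated in the following paragraphs, can be sketched as below. This is an illustrative, hypothetical outline only, not the patented implementation: the function name, the BPM range, and the use of a simple rate match in place of the real-time correlation of STEP 216 are all assumptions for the sake of the example.

```python
def verify_liveness(face_3d, rppg_bpm, bcg_bpm, bpm_range=(40, 180), max_diff=5):
    """Hypothetical sketch of the staged checks of Figure 2.

    STEP 202: a 3D face-like structure must be detected (sonic/photometric).
    STEP 208: the primary (rPPG) pulse must be present and within range.
    STEPS 212/216: a secondary (e.g., ballistocardiogram) pulse must be
    present, valid, and agree with the primary pulse.
    STEP 220/230: accept or reject the target.
    """
    if not face_3d:
        return "reject"                               # STEP 230: no 3D face
    lo, hi = bpm_range
    if rppg_bpm is None or not lo <= rppg_bpm <= hi:  # STEP 208
        return "reject"
    if bcg_bpm is None or not lo <= bcg_bpm <= hi:    # STEP 212
        return "reject"
    if abs(rppg_bpm - bcg_bpm) > max_diff:            # STEP 216: rate match as a
        return "reject"                               # stand-in for correlation
    return "accept"                                   # STEP 220: live user

print(verify_liveness(True, 72, 70))  # -> accept
```

As the specification notes, the phases need not run in this order, and not all phases need be performed.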
[0025] The 3D face detection in STEP 202 can be performed using various methods, or combinations thereof, and can depend on the availability of certain sensors and transmitters on the user device. In one implementation, sound waves (for example, high-frequency sound waves) are used to determine whether a three-dimensional face or, alternatively, a flat screen or a faceless 3D object is being presented to a biometric sensor (for image-based biometrics based on a face or any of its sub-regions, including the ocular sub-region, the biometric sensor can include, for example, a mobile-device camera). One example of a sonic (sound-wave) technique is continuous-transmission frequency modulation (CTFM), in which the distance to different facets/surfaces of a face is measured based on the time-varying frequency of a sonic probe transmitted by a measuring device (for example, an audio output component (earpiece, speaker, or sound transducer) of a device in conjunction with an audio input component (microphone) of the same or a different device). In the case of biometric authentication, the sonic distance measurement can also be used to confirm that the measured interocular distance corresponds to an expected interocular distance determined at the time of the target's enrollment. The preceding is one example of a real-life-scale measurement check, though it is noted that other measurements of the device's distance from the face, such as those coming from the camera's focus mechanism, can also be used. Techniques for 3D face detection are described in detail below.
[0026] In another implementation, the existence and extent of photometric stereo effects are analyzed for characteristics that tend to indicate the presence of a three-dimensional face-like object.
The validity of the photometric effects can also be combined with the previously mentioned sonically measured distances and, optionally, compared with stereo photometric data collected during a biometric enrollment phase. Photic measurements can exploit aliasing to work with cameras that have lower frame rates, provided the device's screen can be driven at a higher frame rate, making the temporal variations of the screen-induced photic probe less perceptible to the user. Note that, if the aforementioned three-dimensional characteristics are measured accurately enough, the 3D profiles of the user's face, as determined using sonic and/or photometric measurements at the time of a valid enrollment, can become user-specific to some extent and can lend more specificity (as a soft biometric) to the anti-spoofing measures described herein.
[0027] If a 3D face-like structure is detected, the device can optionally check liveness further by detecting whether the face-like structure has a pulse that is present and within an expected range (using, for example, facial rPPG based on images captured by a device camera) (STEP 208). Otherwise, if no 3D facial structure is detected, the liveness check fails and the target is rejected (STEP 230). If a valid pulse is detected, a 3D face-like object with apparent circulation has been established as the first phase of liveness detection and anti-spoofing. This phase limits spoofing attacks to 3D structures resembling pulsating skin that can fool rPPG, which is a high bar.
[0028] In a secondary phase, the system can optionally attempt to correlate the detected primary pulse of the face-like structure (for example, the rPPG of the face after 3D sonic and/or photometric facial verification) with a second pulse measurement obtained through a different method, for stronger liveness/anti-spoofing detection (STEPS 212 and 216). The secondary pulse measurement can be performed, for example, using ballistocardiogram signals, which can be captured from the hand-held device's tremors induced by the heart's pumping action and measured by the device's motion sensors, or by a pulse-sensing wearable device, if available, or by another suitable secondary path for checking the heart rate or its harmonics. If no secondary pulse is detected, or the secondary pulse is invalid (for example, falling outside an expected range), or if the correlation fails (for example, the system detects that the pulses do not match in rate or other characteristics), the target is rejected (STEP 230). On the other hand, if the previous steps verify liveness, the target can be accepted as a living, legitimate user (STEP 220). It should be noted that the verification phases described in this implementation need not be carried out in the order described; rather, alternate orderings of the steps are also contemplated. For example, one or more pulse measurements can be taken first, with 3D face detection used to strengthen a liveness-versus-spoof conclusion determined based on the pulse measurements. In addition, not all steps need be performed (for example, a spoof determination can be based exclusively on 3D face detection).
3D Sonic Face Measurement
[0029] This sonic technique detects whether a face (as expected for legitimate eye or face biometric scanning) or a structurally faceless object (for example, a flat panel or other spoofing instrument) is being presented to a biometric sensor (for example, the front camera of a cell phone).
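As a toy illustration of how such a coded sonic probe can be decoded, the pure-Python sketch below cross-correlates a synthetic received signal against a Barker-13 code (one of the maximum-correlation codes named in the summary); the sharp correlation peak marks the echo's delay in samples. The noiseless signal, the 0.3 attenuation, and the helper names are assumptions for the example, not the patented implementation.

```python
# Toy matched-filter demo: locate a Barker-13 coded ping inside a received
# signal by cross-correlation; the correlation peak marks the echo delay.
BARKER_13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

def cross_correlate(received, code):
    """Correlation of `received` against `code` at each possible lag."""
    n, m = len(received), len(code)
    return [sum(received[lag + i] * code[i] for i in range(m))
            for lag in range(n - m + 1)]

def echo_delay(received, code=BARKER_13):
    """Lag (in samples) of the strongest matched-filter response."""
    corr = cross_correlate(received, code)
    return max(range(len(corr)), key=lambda k: abs(corr[k]))

# Simulated received signal: silence, then an attenuated echo at sample 40.
rx = [0.0] * 100
for i, c in enumerate(BARKER_13):
    rx[40 + i] += 0.3 * c
print(echo_delay(rx))  # -> 40
```

With the speed of sound and the sampling rate, the recovered lag converts directly into a round-trip distance, which is how the time-of-flight information described below is obtained.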
The technique applies to image-based biometrics using the face or its sub-regions, including ocular sub-regions. Examples of sonic pings that can be used for 3D face measurement include, but are not limited to, short coded pulse pings, short-term chirps, and CTFM.
[0030] Short coded pulse pings include pings in which maximum-correlation codes (for example, Barker 2 through Barker 13 patterns, either in their original form or with binary phase-shift keying) and/or short-term chirps (for example, linear frequency sweeps with an envelope such as a Kaiser window) are sent through an audio output component, such as an earpiece or other onboard sound transducer. If there are multiple audio output components, acoustic beamforming can be used to spatially focus the sonic pings. Matched-filter or autocorrelation decoding of the echoes from the aforementioned pulse-compression techniques allows reconstruction of the target's coarse 3D signature (which also reflects its texture and material structure, owing to the acoustic impedance of the impacted facets). This information is presented to the user device through the time of flight and the morphology of the received echo, similar to what is seen in sonar and radar systems. Matched filtering entails a cross-correlation of the received echo with the original ping signal. Autocorrelation of the echo alone can be used instead, in which an immediately received copy of the transmitted signal effectively becomes the detection template. In either case, further post-processing, such as computing the amplitude of the analytic version of the decoded signal, is carried out before feature selection and classification.
[0031] For CTFM pings, the distance to different facets/surfaces of the target (here, the user's face or a spoof screen) is measured based on the time-varying frequency of the high-pitched sonic probe transmitted by the device (for example, through a phone earpiece).
[0032] In some implementations, the sonic distance measurement is also used to check the overall distance to the face, to ensure an appropriate match to the expected interocular distance measured through imaging at the time of biometric enrollment (a real-life-scale measurement check). In some implementations, the low signal-to-noise ratio of the echo can be overcome by averaging multiple pings and/or by multi-microphone beamforming and noise cancellation.
[0033] It should be noted that there are two aspects to this technique: (i) rejection of non-facial objects (for example, spoof screens), and (ii) acceptance of 3D face-like sonic profiles, especially those similar to that of the enrolled user (for example, user-specific sonic facial models created during enrollment), which increases anti-spoofing accuracy by adding subject specificity. The latter aspect uses facial-signature learning from sound reflections (representation learning), which can be performed using well-known machine-learning techniques, such as classifier ensembles and deep learning. The accuracy of the 3D sonic facial-profile recognition can be increased by including auxiliary signals from an image sensor. For example, echo profiles will change if the user is wearing glasses or covering part of the face with a scarf. Image analysis can reveal these changes and adjust the classification modules accordingly, for example, by using the appropriate templates and thresholds for those circumstances.
3D Photometric Face Measurement
[0034] In some implementations, after the sonic facial-structure detection (or before, or simultaneously with it), the 3D face measurement is reinforced by examining the existence and extent of lighting variations that interrogate the 3D facial structure, such as photometric stereo induced by high-frequency patterns of a mobile-device screen coded in light intensity, phase, frequency, and color (structured screen lighting). Photometric stereo effects generally depend on the distance from the light source and can therefore be combined with the sonar-measured distances mentioned earlier.
[0035] In other implementations, the photometric verification signatures can be compared with one or more photometric signatures derived during the user's enrollment, in order to make these measures subject-specific for greater sensitivity and specificity. By combining the enhanced 3D sonic and photometric face profiles, the system can not only detect spoofs with better accuracy while avoiding the rejection of real users, but can also treat user-specific sonic-photometric face signatures as a soft biometric and thereby further increase the performance of the primary biometric modality through an additional soft identification modality.
[0036] Photic measurements can also take advantage of image-sensor aliasing for a better user experience if, for example, the device's screen can be driven at a higher frame rate to make the temporal variations of the screen-induced photic probe imperceptible. That is, if the camera is driven at a lower frame rate than the screen, one can use the aliased frequency component of the structured light and proceed as normal.
Cardiac Measurement
[0037] In some implementations, if the face is sonically and/or photometrically validated, the existence (and, in some cases, the value) of a facial pulse can be detected/measured from the front camera of a mobile device over an observation period shorter than that needed for a complete rPPG pulse-rate calculation. This quick check limits spoofing attacks to 3D structures resembling pulsating skin, which is a very high bar. This pulse-identification step can serve as a complementary layer of anti-spoofing protection after the sonic (and, optionally, photometric) measurement.
[0038] In other implementations, for an even stronger liveness check, the proposed method measures and cross-validates the user's cardiac vital signs over multiple paths. One cardiac signal can be determined, for example, from the rPPG of the previously validated 3D face. Additional cardiac signals (or their principal harmonics) can be recovered from ballistocardiogram signals (for example, vibrations of the hand-held device, and their harmonics, induced by the mechanical pumping action of the heart and measured by the device's motion sensors and, optionally, corroborated by small correlated vibrations detected from the device's camera feeds after rigorous signal processing and motion amplification). These additional cardiac signals can also be acquired by other heart-rate sensors where available, such as health wearables or other heart-rate sensors built into the user's mobile device. In some implementations, the signals from the motion sensors are pre-processed by band-pass filtering in the frequency ranges of the targeted heartbeat and its harmonics. In other implementations, heart-rate harmonics are used as the primary ballistocardiogram. In still other implementations, the ballistocardiogram is augmented by correlated cardio-induced movement amplified as seen, for example, by the camera of the mobile device.
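The cross-path validation above can be illustrated with a minimal correlation check between two pulse waveforms. The sketch below uses a plain Pearson correlation on synthetic sinusoidal "pulses"; the threshold of 0.7, the sampling parameters, and the function names are illustrative assumptions, not values from the specification.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length pulse waveforms."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def pulses_agree(rppg, bcg, threshold=0.7):
    """Cardiac-loop liveness check: accept when the optically measured pulse
    (rPPG) and the mechanically measured pulse (ballistocardiogram) co-vary
    strongly. Threshold is an illustrative assumption."""
    return pearson(rppg, bcg) >= threshold

# Two synthetic ~1.2 Hz pulse waveforms, sampled at 30 Hz for 4 seconds;
# the ballistocardiogram is weaker and slightly phase-shifted.
t = [i / 30 for i in range(120)]
rppg = [math.sin(2 * math.pi * 1.2 * s) for s in t]
bcg = [0.5 * math.sin(2 * math.pi * 1.2 * s + 0.1) for s in t]
print(pulses_agree(rppg, bcg))  # -> True
```

In practice the specification's correlation/similarity strength between the two signals can itself serve as the liveness score, as paragraph [0039] describes.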
[0039] Upon detection of a pulse and of a significant real-time correlation between the multiple cardiac signals (for example, rPPG and ballistocardiogram pulse measurements), a greater likelihood of liveness can be asserted. This cardiac-loop liveness score can be, for example, the real-time correlation/similarity strength between the two cardiac signals (ballistocardiogram and rPPG). This additional anti-spoofing layer closes the cardiac vitality verification loop, from the holding hand (mechanical path) to the perceived, validated face/eye (optical and acoustic paths), using the heartbeat of the user seeking biometric verification.
[0040] The presently described technology can incorporate various techniques for heartbeat detection that are known in the art and are described, for example, in US Patent No. 8,700,137, issued April 14, 2014 and entitled Cardiac Performance Monitoring System for Use with Mobile Communications Devices; Biophone: Physiology Monitoring from Peripheral Smartphone Motions, Hernandez, McDuff, and Picard, Engineering in Medicine and Biology Society, 2015 37th Annual International Conference of the IEEE, 2015; and Exploiting Spatial Redundancy of Image Sensor for Motion Robust rPPG, Wang, Stuijk, and de Haan, IEEE Transactions on Biomedical Engineering, vol. 62, no. 2, Feb. 2015, all of which are incorporated by reference in their entirety.
Additional Implementations
[0041] Referring now to Figure 3, during a biometric scan of the user's face and/or eye regions with a front camera according to the techniques described herein, a phone earpiece 302 (and/or other acoustic transducers, including several speakers in a beamforming arrangement focused on the targeted facial areas) emits a series of signals to acoustically interrogate the face presented during the interaction.
The phone's microphone(s) 304 collect the signal reflections, mainly from the face in the case of a live authentication; during a spoofing attack, however, a screen or other reproduction of a face may be presented instead. In some implementations, where the device's bottom microphone 304 is used, the start of the probe-signal emission is detected by time-stamping the first transmission heard by the microphone 304, namely the first and loudest received copy of the acoustic probe (Route 0), given the speed of sound and the acoustic impedance as the ping travels through the body of the phone. The original signal is used in conjunction with its echo received by the phone's microphone 304 for matched filtering (this can include the signal/echo received via external Route 1, in which the signal travels through the air from the earpiece 302 to the microphone, and external Route 2, in which the signal reflects off the target and is received by the microphone 304). Examples of acoustic pings include pulse-compression and/or maximum-correlation sequences, such as short chirps or Barker/M-sequence codes.
[0042] In some implementations, if available, a front-facing microphone is used to improve directionality, suppress background noise, and detect the start of the probe-signal departure. Directional polar patterns of a device microphone, such as cardioid, can be selected for better directional reception. Multiple microphones on the device, if available, can be used for beamforming, for better targeting and thus better reception of the facial echo.
[0043] In some implementations, autocorrelation of the sound reflections is used to decode the spoof/facial echo component of the reflected sound. This method can produce better demodulation, since the matched-filter kernel is then essentially the actually transmitted version of the probe's waveform.
In other implementations, the probe signal is of the CTFM type, and heterodyning is therefore used to resolve the spatial profile and the distance of the target structure. Ultimately, a classifier can decide on the perceived face based on features extracted from the demodulated echoes of any of the methods mentioned above. [0044] Based on the characteristics of the echo, as recorded by the device's microphone(s), there are different ways to determine whether the sound was reflected off the user's face as opposed to a spoofing screen or other spoofing object, by observing the specific multifaceted 3D shape of a face and its absorption/reflectance properties versus those of, for example, a two-dimensional spoof, such as an LCD reproduction of the facial or ocular regions of interest. [0045] Figures 4 and 5 depict examples of demodulated matched-filter echoes using the Barker-2 code sequence for the first 20 cm of the acoustic flight path, where several acoustic reflections through Routes 0, 1 and 2 (see Figure 3) are clearly observed. More particularly, Figure 4 represents the reflection caused by a monitor screen approximately 10-12 cm from the phone emitting the pulse, while Figure 5 represents the different echo signature caused by a real human face approximately 10-14 cm in front of the phone. [0046] In some implementations, the sonic probe signal is a maximum-correlation signal, such as Barker codes of order 2 to 13 (either in their original form or with binary phase-shift keying (BPSK) modulation, where the carrier frequency changes its phase by 180 degrees at each bit-level change), or pseudo-random M-sequences. In some implementations, the sonic probe signal is composed of short chirps (of various frequency ranges, envelopes, sweeps and amplitudes). The probe signal can be, for example, a CTFM signal.
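By way of illustration, the two probe families just described can be generated as follows. This is a sketch; the parameter values follow the example implementations given in paragraph [0047] (11.25 kHz carrier, 44100 Hz sampling), and the function names are illustrative.

```python
import numpy as np

FS = 44100  # assumed sampling rate

def windowed_chirp(f0=11250.0, f1=22500.0, dur=0.010, beta=6.0):
    """Kaiser-windowed cosine chirp sweeping linearly from f0 to f1 in dur seconds."""
    n = int(FS * dur)
    t = np.arange(n) / FS
    slope = (f1 - f0) / dur
    return np.cos(2 * np.pi * (f0 * t + 0.5 * slope * t ** 2)) * np.kaiser(n, beta)

def barker2_bpsk(carrier=11250.0, bit_dur=0.002):
    """Barker-2 code [+1, -1] as sine BPSK: the carrier phase flips
    180 degrees at the bit-level change."""
    bit = int(FS * bit_dur)
    t = np.arange(2 * bit) / FS
    code = np.repeat([1.0, -1.0], bit)  # Barker-2 sequence
    return code * np.sin(2 * np.pi * carrier * t)
```

Either waveform can serve directly as the matched-filter kernel for demodulating the received echoes.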
These short, high-frequency signals are transmitted from an audio output component, such as an earpiece speaker (in the case of a smartphone or tablet, for example, because it naturally faces the target when the front camera is used for capture). In some implementations, however, other sound transducers of various devices are used for beamforming, to better concentrate the sonic probe on the biometric target. [0047] The sonic probe signal can take various forms among the implementations of the disclosed techniques. For example, in one implementation, the sonic probe signal is a CTFM signal with Hanning-windowed linear chirps that sweep from 16 kHz to 20 kHz. In another implementation, the probe signal is a maximum-correlation sequence, such as a Barker-2 sequence with 180-degree-shift sine BPSK at a carrier frequency of 11.25 kHz, sampled at 44100 Hz. In a further implementation, the probe signal is a windowed chirp. The chirp can be, for example, a cosine signal starting at 11.25 kHz, sweeping linearly to 22.5 kHz in 10 ms, and sampled at 44100 Hz. The window function can be a Kaiser window of length 440 samples (10 ms at a sampling rate of 44.1 kHz), with a beta value of 6. The foregoing values represent probe signal parameters that provide reasonably accurate results. It should be noted, however, that the probe signal parameters that provide accurate results may vary according to the characteristics of the device and of the audio input/output components. Consequently, other ranges of values are contemplated for use with the present techniques. [0048] In some implementations, the initial phase, the frequency and the exact start of playback of the emitted probe signal, or even the type of code itself, can be randomized (for example, for a PSK-encoded Barker probe pulse train) by the mobile biometric module.
This randomization can thwart hypothetical (albeit elaborate and sophisticated) attacks in which a replayed false echo is played at the phone to defeat the proposed sonic facial scanner. Because the random phase/type/start/frequency of the PSK modulation of the encoded sonic sequence, or other dynamic attributes of the emitted probe, are not known to the attacker, the hypothetically injected echoes will neither be demodulated by the matched filter nor follow the exact expected patterns. [0049] During the basic Barker-code/chirp/CTFM procedure, the reflection of the probe signal, delayed in time (and therefore shifted in frequency, for CTFM) according to its round-trip distance, is recorded by the device's microphone(s) or other audio input component(s). The original or otherwise encoded chirp probe can be detected by a matched filter or by autocorrelation (for Barker codes and short chirps), or demodulated to baseband by multiplying the echo by the original frequency ramp and taking the low-frequency product (heterodyning). Each impacted facet of the target reflects the probe pulse in a manner related to its textural and structural properties (for example, the difference in acoustic impedance between the air and the impacted surface, as well as its size and shape) and its distance from the sonic source (round-trip sound delay). Thus, in its simplest form (assuming no unwanted background noise or echoes), a face will produce multiple reflections of lower magnitude (reflected by its multiple main facets at the air-skin and soft tissue-bone interfaces), while, for example, a spoofing monitor screen will produce a single, stronger reflection (compare Figures 4 and 5).
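The heterodyne demodulation mentioned in paragraph [0049] — multiplying the echo by the original frequency ramp and keeping the low-frequency product, so that round-trip delay maps to a beat frequency — can be sketched as follows. The sweep parameters are illustrative assumptions.

```python
import numpy as np

def ctfm_range_profile(echo, fs=44100, f0=16000.0, f1=20000.0, sweep_time=0.1):
    """Heterodyne a received CTFM echo against the transmitted linear sweep.
    Returns (freqs, power) of the low-frequency product: a reflector at
    round-trip delay tau appears as a spectral line near slope * tau Hz."""
    n = len(echo)
    t = np.arange(n) / fs
    slope = (f1 - f0) / sweep_time              # Hz per second
    tx = np.cos(2 * np.pi * (f0 * t + 0.5 * slope * t ** 2))
    mixed = echo * tx                            # sum- and difference-frequency terms
    power = np.abs(np.fft.rfft(mixed * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    keep = freqs < 500.0                         # keep only the difference term
    return freqs[keep], power[keep]
```

For a 16-20 kHz sweep over 100 ms (slope 40 kHz/s), a facet 15 cm away (about 0.87 ms round trip at 343 m/s) beats at roughly 35 Hz; the multiple facets of a face thus produce multiple low-frequency lines, while a flat screen produces a single dominant line.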
[0050] Given the round-trip delay of each reflection, the distance of each reflecting target facet can be translated into a time delay in the matched filter/autocorrelation response, or into a frequency delta in the Power Spectral Density (PSD) (see Figures 6 and 7, described below), providing a target-specific echo morphology. Different methods can be used to calculate PSD features from CTFM signals. In some implementations, a multitaper method is applied to a 0-200 Hz extent of the demodulated echo, and the bin outputs are used as input to a classifier, which can be, for example, a Support Vector Machine with a linear or Gaussian kernel, or similar. [0051] More specifically, in various implementations, one or more of the following steps are taken for chirped/coded probe pulse demodulation and target classification. In one case, the sonic probe avoids loud acoustic environments by frequently checking the microphone readings (for example, every 100 ms, every 500 ms, every 1 s, etc.), listening for potentially disturbing noise. This verification may include calculating the correlation (convolution with a time-reversed copy) with the coded probe signal and setting the trigger threshold to that obtained in a reasonably quiet environment. In some implementations, a similar additional check is conducted right after playing the probe signal, to determine whether a disturbing noise occurred just after the sonic ping. If so, the session can be dropped. Multiple chirps can also be averaged (or median-processed), at the signal or decision-score level, to improve results. [0052] In one implementation, pre-processing involves high-pass filtering of the received signal to pass only the frequencies relevant to the transmitted chirped/coded signal.
This high-pass filter can be, for example, an equiripple finite impulse response filter with a stopband frequency of 9300 Hz, a passband frequency of 11750 Hz, a stopband attenuation of 0.015848931925, a passband ripple of 0.037399555859 and a density factor of 20. [0053] In some implementations, demodulation includes a normalized cross-correlation of the received high-pass-filtered echo with the original/encoded sonic signal (equivalent to normalized convolution with the time-reversed sonic probe). The maximum response is considered the start/origin of the decoded signal. Demodulation may include, for example, autocorrelation of the signal portion from 0.227 ms before the aforementioned start to 2.27 ms plus the chirped/coded signal length after the start marker. Post-processing of the demodulated signal may include calculating the magnitude of its analytic signal (a complex helical sequence composed of the real signal plus an imaginary 90-degree phase-shifted version), to further clarify the envelope of the demodulated echo. In one implementation, assuming a sampling rate of 44100 Hz, the first 100 samples of the above analytic signal magnitude are further multiplied by a piecewise-linear weighting factor that is 1 for the first 20 samples and increases linearly to 5 over samples 21-100, to compensate for sound attenuation due to the distance traveled. Other weighting factors can be used, such as one that follows a second-order law. [0054] Figure 6 depicts multiple reflections from different facets of a face (three samples are shown, using a CTFM technique). These echoes reveal face-specific spatial structures (as opposed to spoof signatures), owing to the different delays (and magnitudes) of the different sonic paths, as detected in the demodulated sonic probe echoes. In contrast, the spoofing screen shown in Figure 7 mainly causes a single large spike during demodulation.
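The analytic-signal envelope and the piecewise-linear distance compensation of paragraph [0053] can be sketched as follows, using an FFT-based Hilbert transform. The 1-to-5 weighting ramp follows the values given above; the function names and the rest are illustrative.

```python
import numpy as np

def analytic_envelope(x):
    """Magnitude of the analytic signal (the real signal plus an imaginary
    90-degree phase-shifted copy), via the FFT-based Hilbert transform."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))

def distance_weighted_echo(envelope):
    """Compensate attenuation over distance: weight 1 for samples 0-19,
    then rising linearly to 5 across samples 20-99 (at 44100 Hz)."""
    w = np.ones(len(envelope))
    m = min(len(envelope), 100)
    if m > 20:
        w[20:m] = np.linspace(1.0, 5.0, m - 20)
    return envelope * w
```

The weighted envelope is what feeds the feature vectors of Appendices A and B, whose entries index samples of this decoded signal.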
Challenges can arise from the low spatial resolution and high scattering of a typical face, due to the 20 kHz upper-frequency limit imposed by the audio circuitry of certain phones. Other challenges include variations caused by user behavior and background noise, movement/reflection disturbances induced by the uncontrolled environment, and generally low SNR due to limitations of the device's audio circuitry, all addressed by the techniques described here. [0055] In some implementations, the feature set for the aforementioned classifier is a collection of subsets selected for best classification performance using a random subspace classification technique. The random subspace classifier ensemble can be, for example, a sum-rule-fused collection of k-nearest-neighbor classifiers, or a collection of support vector machines, operating on a set of feature vectors from the decoded analytic signals. Appendices A and B provide classifiers and input spaces derived experimentally using a random subspace ensemble construction methodology. Appendix A lists an example set of 80 feature vectors selected using a large training/test data set, consisting of more than 18,000 echoes recorded from real users as well as from several spoofing screens, using random subspace sampling together with kNN ensemble classifiers. The subspaces were obtained based on the average cross-validation performance (measured through ROC curve analysis) of different subspace configurations (i.e., input sample locations and dimensions, as well as the number of participating classifiers). The column location of each number gives the digital signal sample number from the decoded start of the coded/chirped signal transmission, using a sampling rate of 44100 Hz.
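The random-subspace ensemble of paragraph [0055] — multiple kNN classifiers, each restricted to its own feature subset, fused by a sum rule — can be sketched as a minimal NumPy version. The subset sizes and k below are illustrative, not the Appendix A configuration.

```python
import numpy as np

class RandomSubspaceKNN:
    """Sum-rule-fused ensemble of kNN classifiers, each restricted to a
    random feature subset (label 1 = live face, 0 = spoof)."""

    def __init__(self, n_members=5, subset_size=4, k=3, seed=0):
        self.n_members, self.subset_size, self.k = n_members, subset_size, k
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        d = X.shape[1]
        # each member sees only its own random feature subset
        self.subsets = [self.rng.choice(d, self.subset_size, replace=False)
                        for _ in range(self.n_members)]
        self.X, self.y = np.asarray(X), np.asarray(y)
        return self

    def predict(self, x):
        score = 0.0
        for cols in self.subsets:
            dist = np.linalg.norm(self.X[:, cols] - x[cols], axis=1)
            neighbors = self.y[np.argsort(dist)[:self.k]]
            score += float(neighbors.mean())   # fraction of "live" neighbors
        return int(score / self.n_members > 0.5)
```

In the patent's setting, each member's `cols` would be one of the fixed, experimentally selected sample-index subsets listed in Appendix A rather than a fresh random draw.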
In another implementation, the subspace ensemble is a collection of Support Vector Machine classifiers with Gaussian kernels that receive the set of 40 feature vectors from the decoded analytic signals listed in Appendix B, selected as a subset of the 80 features of Appendix A based on their Fisher discriminant indexes (from Fisher Linear Discriminant classification using the larger data set). Again, the column location of each number gives the digital signal sample number from the decoded start of the chirped/coded signal transmission, using a sampling rate of 44100 Hz. [0056] To accurately identify representations of specific faces (and not just generic faces versus spoofs) in the echo space, in some implementations the sonar classifier is trained to be subject-specific (and possibly device-specific, as the following method accommodates the combined peculiarities of the user-device pair). This functionality can be obtained by training the classifier to distinguish the user's sonic features, acquired during biometric enrollment, from those of a representative impostor population (and not just from spoofs, for subject specificity). Another advantage of this method is that the features obtained during enrollment also reflect the specifics of the device used for enrollment, so the classifiers are adapted to the acoustic peculiarities of the specific device. The user-specific (and, as a result, device-specific) sonic pattern detector can be used as part of a more accurate anti-spoofing classifier tuned per user (and device), where this subject-specific classification is combined with the aforementioned detection classifier. In some implementations, the user-specific sonic profile detector can itself be used as a soft biometric.
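The Fisher discriminant index used above to pare the 80 features down to 40 can be sketched with the standard two-class Fisher ratio. This illustrates the criterion only, not the exact procedure run on the patent's data set.

```python
import numpy as np

def fisher_ratios(X, y):
    """Per-feature Fisher discriminant index for a two-class problem:
    (mu1 - mu0)^2 / (var1 + var0). Higher means more class-separating."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X1.mean(axis=0) - X0.mean(axis=0)) ** 2
    den = X1.var(axis=0) + X0.var(axis=0) + 1e-12
    return num / den

def top_features(X, y, n_keep):
    """Indices of the n_keep features with the largest Fisher ratio."""
    return np.argsort(fisher_ratios(X, y))[::-1][:n_keep]
```

Applying `top_features` with `n_keep=40` to the 80-dimensional echo features would produce a reduced index set analogous to Appendix B.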
[0057] The above acoustic interrogations of the ocular/facial biometric target can be augmented, for better anti-spoofing, by facial photometric responses to structured light emitted by the mobile device interrogating the scene. In some implementations, the structured light takes the form of coded intensity, coded color variations, coded spatial distribution and/or coded phase variations of the light emitted by the device, for example via built-in LCD or LED light sources. The aforementioned codes can be defined in terms of specific frequency regimes and maximum-correlation sequences, such as Barker or M sequences. In other implementations, photometric profiles of the user's face are pre-computed based on generic population profiles of users versus spoofs (a user-agnostic photometric face model). [0058] In addition, in some implementations, 3D profiles of the user's face, as detected from the user's photometric reflections at the time of validated enrollment, are learned by classifiers for user specificity. Together with the sonic modality, or alone, these user-specific profiles can be used as soft biometrics, which also induces more subject specificity, and therefore precision, in these anti-spoofing measures. [0059] The systems and techniques described here can be implemented in a computing system that includes a back-end component (for example, as a data server), or that includes a middleware component (for example, an application server), or that includes a front-end component (for example, a client computer with a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware or front-end components. The system components can be interconnected by any form or medium of digital data communication (for example, a communication network).
Examples of communication networks include a local area network (LAN), a wide area network (WAN) and the Internet. [0060] The computing system can include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The client-server relationship arises by virtue of computer programs running on the respective computers and having a client-server relationship with each other. Various embodiments have been described. It will nevertheless be understood that various modifications can be made without departing from the spirit and scope of the invention. [0061] The embodiments of the subject matter and the operations described in this specification can be implemented in digital electronic circuitry, or in software, firmware or hardware, including the structures described in this specification and their structural equivalents, or in combinations of one or more of these. The embodiments of the subject matter described in this specification can be implemented as one or more computer programs, that is, one or more modules of computer program instructions, encoded on a computer storage medium for execution by, or to control the operation of, a data processing apparatus. Alternatively or additionally, the program instructions can be encoded on an artificially generated propagated signal, for example a machine-generated electrical, optical or electromagnetic signal, generated to encode information for transmission to a suitable receiver apparatus for execution by a data processing apparatus. A computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random- or serial-access memory array or device, or a combination of one or more of these.
Moreover, while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded on an artificially generated propagated signal. The computer storage medium can also be, or be included in, one or more separate physical components or media (for example, multiple CDs, disks, or other storage devices). [0062] The operations described in this specification can be implemented as operations performed by a data processing apparatus on data stored on one or more computer-readable storage devices or received from other sources. [0063] The term data processing apparatus encompasses all kinds of apparatuses, devices and machines for processing data, including, by way of example, a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing. The apparatus can include special-purpose logic circuitry, for example an FPGA (field-programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, for example code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of these. The apparatus and the execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures. [0064] A computer program (also known as a program, software, software application, script or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and can be deployed in any form, including as a standalone program or as a module, component, subroutine, object or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (for example, one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (for example, files that store one or more modules, subprograms or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. [0065] The embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, for example as a data server, or that includes a middleware component, for example an application server, or that includes a front-end component, for example a client computer with a graphical user interface or a web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware or front-end components. The system components can be interconnected by any form or medium of digital data communication, for example a communication network. Examples of communication networks include a local area network (LAN) and a wide area network (WAN), an inter-network (for example, the Internet) and peer-to-peer networks (for example, ad hoc peer-to-peer networks). [0066] The computing system can include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network.
The client-server relationship arises by virtue of computer programs running on the respective computers and having a client-server relationship with each other. In some embodiments, a server transmits data (for example, an HTML page) to a client device (for example, for the purposes of displaying data to, and receiving user input from, a user interacting with the client device). Data generated at the client device (for example, a result of the user interaction) can be received from the client device at the server. [0067] A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware or a combination thereof installed on the system that in operation causes the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by a data processing apparatus, cause the apparatus to perform the actions.
Appendix A
[0068] Feature Vector Set 1:
Classifier 1: 7, 9, 14, 15, 18, 20, 24, 27, 35, 37, 40, 45, 55, 58, 60, 64, 65, 70, 80, 81, 98, 100
Classifier 2: 6, 12, 13, 23, 26, 36, 44, 47, 50, 52, 58, 59, 63, 64, 67, 76, 77, 85, 86, 87, 89, 92
Classifier 3: 10, 21, 22, 25, 31, 32, 34, 37, 38, 46, 49, 62, 72, 73, 80, 82, 83, 84, 86, 90, 93, 95
Classifier 4: 1, 2, 5, 8, 15, 17, 20, 22, 23, 28, 29, 30, 41, 42, 51, 56, 61, 78, 83, 94, 96, 99
Classifier 5: 3, 4, 12, 16, 28, 30, 32, 37, 39, 43, 45, 54, 57, 60, 63, 66, 76, 78, 84, 87, 88, 97
Classifier 6: 4, 11, 13, 19, 27, 31, 39, 44, 47, 48, 49, 53, 58, 69, 71, 74, 75, 91, 93, 94, 99, 100
Classifier 7: 1, 2, 4, 6, 8, 9, 11, 13, 26, 33, 36, 41, 50, 51, 54, 67, 68, 69, 73, 79, 85, 90
Classifier 8: 10, 14, 17, 18, 19, 24, 33, 34, 36, 38, 41, 43, 52, 55, 59, 60, 68, 92, 93, 96, 98, 100
Classifier 9: 8, 17, 22, 23, 24, 25, 27, 30, 35, 40, 46, 56, 57, 62, 63, 70, 71, 72, 79, 88, 89, 99
Classifier 10: 3, 5, 9, 11, 29, 42, 58, 61, 62, 63, 66, 71, 75, 77, 80, 81, 82, 90, 94, 95, 96, 97
Classifier 11: 1, 3, 6, 14, 16, 21, 25, 32, 34, 35, 38, 39, 48, 49, 53, 55, 66, 70, 75, 78, 80, 97
Classifier 12: 7, 10, 15, 20, 24, 31, 33, 36, 40, 43, 44, 50, 52, 65, 67, 74, 76, 85, 91, 96, 98, 99
Classifier 13: 9, 16, 19, 20, 26, 41, 46, 47, 48, 49, 51, 68, 69, 73, 77, 82, 83, 84, 87, 89, 91, 95
Classifier 14: 2, 6, 8, 11, 18, 23, 26, 28, 29, 35, 38, 42, 45, 57, 61, 62, 64, 72, 88, 93, 96, 100
Classifier 15: 6, 12, 19, 20, 21, 37, 42, 43, 53, 54, 58, 59, 61, 70, 73, 74, 77, 78, 79, 83, 86, 93
Classifier 16: 3, 5, 6, 7, 18, 28, 30, 35, 39, 47, 51, 54, 55, 56, 65, 72, 82, 85, 86, 89, 90, 92
Classifier 17: 1, 2, 7, 31, 33, 34, 36, 39, 46, 56, 59, 64, 65, 66, 67, 69, 75, 79, 81, 86, 87, 92
Classifier 18: 9, 12, 13, 14, 15, 16, 17, 21, 27, 41, 44, 45, 49, 52, 57, 74, 76, 77, 81, 88, 91, 95
Classifier 19: 5, 17, 26, 29, 30, 45, 46, 48, 63, 65, 67, 68, 71, 72, 74, 75, 76, 88, 92, 96, 97, 98
Classifier 20: 1, 9, 13, 19, 21, 22, 25, 27, 37, 47, 50, 51, 53, 60, 61, 66, 70, 78, 79, 84, 95, 98
Classifier 21: 1, 2, 11, 12, 16, 18, 29, 32, 40, 42, 48, 50, 57, 62, 71, 73, 83, 84, 87, 90, 94, 100
Classifier 22: 3, 4, 7, 10, 15, 23, 25, 26, 31, 32, 33, 41, 43, 52, 56, 58, 76, 82, 88, 91, 92, 99
Classifier 23: 3, 4, 5, 7, 8, 12, 13, 22, 23, 33, 34, 38, 40, 44, 54, 60, 62, 63, 64, 89, 94, 97
Classifier 24: 10, 14, 15, 16, 20, 21, 27, 30, 42, 45, 47, 53, 68, 69, 72, 74, 79, 80, 81, 84, 89, 97
Classifier 25: 10, 11, 24, 28, 29, 32, 43, 44, 52, 64, 65, 66, 70, 71, 75, 77, 85, 87, 90, 94, 95, 100
Classifier 26: 5, 8, 16, 29, 33, 36, 37, 40, 52, 53, 54, 55, 56, 57, 59, 60, 69, 73, 82, 86, 91, 97
Classifier 27: 2, 5, 6, 12, 17, 22, 25, 34, 35, 39, 46, 48, 55, 59, 61, 64, 73, 75, 78, 79, 90, 99
Classifier 28: 2, 4, 9, 18, 24, 27, 31, 34, 36, 37, 42, 43, 44, 66, 78, 80, 81, 83, 85, 93, 96, 98
Classifier 29: 4, 5, 8, 13, 14, 17, 18, 19, 22, 26, 28, 38, 45, 46, 49, 51, 58, 60, 61, 72, 89, 93
Classifier 30: 20, 21, 27, 29, 31, 38, 40, 41, 50, 54, 58, 64, 65, 67, 68, 69, 81, 82, 92, 94, 98, 100
Classifier 31: 3, 4, 7, 9, 11, 19, 25, 26, 28, 30, 33, 53, 54, 55, 57, 65, 67, 71, 76, 80, 83, 86
Classifier 32: 2, 8, 10, 12, 14, 21, 23, 32, 35, 36, 47, 49, 56, 62, 69, 70, 77, 82, 84, 91, 95, 99
Classifier 33: 1, 14, 17, 18, 24, 28, 34, 39, 48, 51, 53, 59, 63, 67, 74, 85, 87, 88, 89, 95, 97, 100
Classifier 34: 3, 10, 11, 13, 15, 23, 28, 31, 35, 43, 46, 50, 51, 55, 60, 63, 68, 71, 77, 85, 88, 98
Classifier 35: 1, 6, 19, 38, 41, 42, 44, 45, 46, 47, 56, 57, 58, 61, 70, 73, 79, 81, 84, 90, 92, 100
Classifier 36: 16, 24, 25, 30, 32, 35, 37, 40, 48, 50, 52, 56, 64, 65, 66, 68, 72, 75, 76, 80, 87, 94
Classifier 37: 6, 7, 8, 39, 48, 54, 55, 57, 59, 63, 67, 74, 78, 79, 82, 86, 87, 89, 91, 93, 96, 99
Classifier 38: 4, 13, 15, 20, 23, 29, 31, 39, 40, 41, 42, 43, 47, 49, 50, 53, 59, 72, 73, 75, 82, 84
Classifier 39: 7, 15, 16, 17, 20, 22, 25, 27, 49, 51, 60, 62, 65, 76, 77, 80, 86, 91, 92, 93, 95, 97
Classifier 40: 1, 11, 14, 22, 24, 26, 28, 30, 35, 36, 38, 41, 49, 52, 56, 61, 78, 83, 90, 92, 96, 99
Classifier 41: 2, 9, 12, 18, 21, 30, 33, 34, 44, 47, 49, 61, 69, 71, 74, 76, 77, 81, 84, 85, 93, 94
Classifier 42: 3, 8, 12, 19, 22, 26, 31, 32, 42, 48, 50, 51, 64, 66, 67, 70, 79, 83, 87, 91, 98, 100
Classifier 43: 4, 6, 10, 21, 23, 34, 37, 44, 45, 46, 52, 55, 57, 58, 59, 60, 63, 68, 75, 78, 79, 94
Classifier 44: 2, 5, 7, 11, 13, 23, 24, 39, 41, 43, 57, 62, 70, 72, 74, 77, 80, 84, 88, 94, 97, 100
Classifier 45: 3, 5, 10, 14, 16, 21, 32, 33, 34, 39, 45, 64, 70, 73, 74, 83, 87, 88, 89, 90, 96, 99
Classifier 46: 10, 15, 18, 19, 20, 25, 26, 29, 40, 52, 55, 58, 62, 68, 78, 81, 85, 86, 89, 93, 96, 98
Classifier 47: 1, 8, 10, 15, 27, 30, 32, 33, 36, 38, 48, 53, 54, 66, 67, 69, 70, 71, 85, 95, 97, 98
Classifier 48: 2, 3, 5, 7, 9, 14, 22, 28, 43, 47, 50, 51, 53, 54, 65, 71, 73, 76, 81, 82, 83, 92
Classifier 49: 4, 6, 16, 17, 25, 31, 35, 41, 42, 45, 50, 51, 55, 62, 68, 77, 79, 80, 83, 86, 87, 95
Classifier 50: 1, 5, 9, 12, 13, 17, 18, 21, 24, 28, 37, 38, 39, 40, 61, 63, 69, 70, 73, 75, 82, 91
Classifier 51: 2, 3, 11, 15, 19, 26, 27, 29, 32, 34, 36, 37, 44, 48, 56, 59, 62, 66, 69, 71, 90, 93
Classifier 52: 8, 12, 14, 20, 22, 35, 47, 52, 54, 57, 60, 63, 64, 65, 69, 72, 78, 81, 84, 88, 91, 96
Classifier 53: 4, 8, 17, 29, 31, 42, 43, 46, 48, 53, 56, 58, 60, 61, 62, 65, 66, 68, 75, 76, 86, 94
Classifier 54: 7, 13, 15, 16, 19, 20, 21, 24, 25, 33, 36, 49, 70, 80, 86, 89, 90, 94, 95, 98, 99, 100
Classifier 55: 2, 6, 7, 10, 13, 18, 19, 22, 23, 29, 30, 40, 57, 58, 65, 66, 67, 72, 73, 88, 92, 99
Classifier 56: 1, 6, 9, 11, 18, 20, 27, 30, 38, 44, 59, 74, 75, 78, 82, 84, 85, 86, 89, 91, 92, 97
Classifier 57: 5, 12, 26, 33, 37, 38, 39, 42, 45, 46, 49, 52, 54, 56, 60, 66, 71, 73, 77, 90, 91, 94
Classifier 58: 6, 8, 16, 26, 28, 34, 35, 41, 44, 45, 46, 49, 50, 63, 68, 72, 79, 83, 87, 96, 97, 99
Classifier 59: 1, 4, 17, 23, 27, 29, 30, 31, 40, 43, 50, 51, 61, 64, 67, 68, 74, 76, 81, 93, 95, 100
Classifier 60: 2, 3, 11, 13, 23, 24, 25, 35, 47, 49, 52, 56, 57, 59, 71, 74, 75, 79, 81, 88, 96, 98
Classifier 61: 1, 7, 9, 12, 16, 17, 22, 32, 34, 36, 37, 46, 53, 72, 76, 77, 82, 85, 87, 88, 92, 95
Classifier 62: 3, 4, 11, 14, 17, 18, 22, 24, 25, 31, 50, 51, 54, 55, 57, 63, 78, 80, 87, 89, 92, 97
Classifier 63: 5, 6, 20, 21, 24, 32, 33, 36, 37, 38, 39, 43, 44, 46, 47, 60, 64, 66, 67, 69, 83, 90
Classifier 64: 7, 10, 14, 15, 19, 27, 28, 35, 40, 45, 48, 53, 54, 59, 61, 78, 82, 84, 85, 96, 98, 100
Classifier 65: 1, 8, 12, 15, 27, 29, 34, 40, 41, 44, 47, 52, 53, 55, 58, 59, 66, 70, 80, 89, 93, 97
Classifier 66: 2, 5, 6, 9, 10, 14, 26, 28, 31, 42, 43, 56, 60, 62, 63, 74, 80, 81, 90, 95, 98, 99
Classifier 67: 11, 13, 18, 20, 21, 27, 37, 38, 41, 42, 45, 51, 61, 62, 70, 76, 77, 82, 83, 88, 91, 93
Classifier 68: 2, 3, 9, 11, 12, 15, 19, 25, 27, 32, 36, 40, 49, 68, 69, 71, 72, 75, 85, 90, 98, 99
Classifier 69: 13, 16, 17, 18, 26, 29, 30, 32, 36, 39, 41, 46, 48, 55, 58, 61, 64, 65, 67, 79, 86, 100
Classifier 70: 1, 4, 23, 25, 30, 33, 34, 44, 45, 54, 60, 73, 77, 79, 84, 86, 89, 93, 94, 96, 98, 100
Classifier 71: 2, 4, 10, 13, 20, 22, 28, 34, 37, 38, 44, 45, 50, 58, 67, 69, 73, 81, 87, 91, 92, 94
Classifier 72: 8, 9, 11, 18, 19, 31, 47, 48, 54, 56, 57, 58, 62, 64, 68, 72, 74, 75, 84, 88, 97, 99
Classifier 73: 3, 4, 5, 21, 24, 33, 35, 40, 42, 43, 53, 55, 59, 63, 64, 65, 78, 83, 84, 85, 95, 97
Classifier 74: 7, 9, 16, 17, 20, 29, 32, 36, 39, 47, 51, 52, 53, 58, 59, 70, 71, 76, 80, 89, 93, 94
Classifier 75: 5, 10, 12, 14, 19, 23, 26, 33, 41, 44, 56, 57, 59, 60, 62, 69, 72, 75, 91, 92, 95, 99
Classifier 76: 22, 25, 31, 35, 38, 42, 43, 46, 50, 65, 66, 67, 78, 81, 83, 85, 86, 87, 89, 90, 97, 99
Classifier 77: 1, 2, 3, 8, 10, 11, 37, 49, 54, 61, 63, 66, 68, 69, 71, 75, 76, 77, 78, 79, 83, 100
Classifier 78: 1, 5, 8, 14, 20, 23, 24, 26, 28, 32, 35, 39, 46, 48, 52, 53, 55, 73, 80, 84, 88, 93
Classifier 79: 3, 6, 7, 14, 16, 21, 29, 30, 37, 47, 52, 55, 60, 61, 62, 70, 74, 79, 81, 82, 92, 100
Classifier 80: 7, 15, 22, 25, 31, 34, 35, 36, 41, 44, 45, 48, 49, 51, 53, 56, 72, 73, 77, 80, 81, 82
Appendix B
[0069] Feature Vector Set 2:
Classifier 1: 7, 9, 14, 15, 18, 20, 24, 27, 35, 37, 40, 45, 55, 58, 60, 64, 65, 70, 80, 81, 98, 100
Classifier 2: 1, 2, 5, 8, 15, 17, 20, 22, 23, 28, 29, 30, 41, 42, 51, 56, 61, 78, 83, 94, 96, 99
Classifier 3: 3, 4, 12, 16, 28, 30, 32, 37, 39, 43, 45, 54, 57, 60, 63, 66, 76, 78, 84, 87, 88, 97
Classifier 4: 4, 11, 13, 19, 27, 31, 39, 44, 47, 48, 49, 53, 58, 69, 71, 74, 75, 91, 93, 94, 99, 100
Classifier 5: 1, 2, 4, 6, 8, 9, 11, 13, 26, 33, 36, 41, 50, 51, 54, 67, 68, 69, 73, 79, 85, 90
Classifier 6: 3, 5, 9, 11, 29, 42, 58, 61, 62, 63, 66, 71, 75, 77, 80, 81, 82, 90, 94, 95, 96, 97
Classifier 7: 7, 10, 15, 20, 24, 31, 33, 36, 40, 43, 44, 50, 52, 65, 67, 74, 76, 85, 91, 96, 98, 99
Classifier 8: 2, 6, 8, 11, 18, 23, 26, 28, 29, 35, 38, 42, 45, 57, 61, 62, 64, 72, 88, 93, 96, 100
Classifier 9: 3, 5, 6, 7, 18, 28, 30, 35, 39, 47, 51, 54, 55, 56, 65, 72, 82, 85, 86, 89, 90, 92
Classifier 10: 5, 17, 26, 29, 30, 45, 46, 48, 63, 65, 67, 68, 71, 72, 74, 75, 76, 88, 92, 96, 97, 98
Classifier 11: 3, 4, 7, 10, 15, 23, 25, 26, 31, 32, 33, 41, 43, 52, 56, 58, 76, 82, 88, 91, 92, 99
Classifier 12: 3, 4, 5, 7, 8, 12, 13, 22, 23, 33, 34, 38, 40, 44, 54, 60, 62, 63, 64, 89, 94, 97
Classifier 13: 5, 8, 16, 29, 33, 36, 37, 40, 52, 53, 54, 55, 56, 57, 59, 60, 69, 73, 82, 86, 91, 97
Classifier 14: 2, 5, 6, 12, 17, 22, 25, 34, 35, 39, 46, 48, 55, 59, 61, 64, 73, 75, 78, 79, 90, 99
Classifier 15: 2, 4, 9, 18, 24, 27, 31, 34, 36, 37, 42, 43, 44, 66, 78, 80, 81, 83, 85, 93, 96, 98
Classifier 16: 4, 5, 8, 13, 14, 17, 18, 19, 22, 26, 28, 38, 45, 46, 49, 51, 58, 60, 61, 72, 89, 93
Classifier 17: 3, 4, 7, 9, 11, 19, 25, 26, 28, 30, 33, 53, 54, 55, 57, 65, 67, 71, 76, 80, 83, 86
Classifier 18: 4, 13, 15, 20, 23, 29, 31, 39, 40, 41, 42, 43, 47, 49, 50, 53, 59, 72, 73, 75, 82, 84
Classifier 19: 4, 6, 10, 21, 23, 34, 37, 44, 45, 46, 52, 55, 57, 58, 59, 60, 63, 68, 75, 78, 79, 94
Classifier 20: 2, 5, 7, 11, 13, 23, 24, 39, 41, 43, 57, 62, 70, 72, 74, 77, 80, 84, 88, 94, 97, 100
Classifier 21: 3, 5, 10, 14, 16, 21, 32, 33, 34, 39, 45, 64, 70, 73, 74, 83, 87, 88, 89, 90, 96, 99
Classifier 22: 2, 3, 5, 7, 9, 14, 22, 28, 43, 47, 50, 51, 53, 54, 65, 71, 73, 76, 81, 82, 83, 92
Classifier 23: 4, 6, 16, 17, 25, 31, 35, 41, 42, 45, 50, 51, 55, 62, 68, 77, 79, 80, 83, 86, 87, 95
Classifier 24: 1, 5, 9, 12, 13, 17, 18, 21, 24, 28, 37, 38, 39, 40, 61, 63, 69, 70, 73, 75, 82, 91
Classifier 25: 4, 8, 17, 29, 31, 42, 43, 46, 48, 53, 56, 58, 60, 61, 62, 65, 66, 68, 75, 76, 86, 94
Classifier 26: 2, 6, 7, 10, 13, 18, 19, 22, 23, 29, 30, 40, 57, 58, 65, 66, 67, 72, 73, 88, 92, 99
Classifier 27: 5, 12, 26, 33, 37, 38, 39, 42, 45, 46, 49, 52, 54, 56, 60, 66, 71, 73, 77, 90, 91, 94
Classifier 28: 1, 4, 17, 23, 27, 29, 30, 31, 40, 43, 50, 51, 61, 64, 67, 68, 74, 76, 81, 93, 95, 100
Classifier 29: 1, 7, 9, 12, 16, 17, 22, 32, 34, 36, 37, 46, 53, 72, 76, 77, 82, 85, 87, 88, 92, 95
Classifier 30: 3, 4, 11, 14, 17, 18, 22, 24, 25, 31, 50, 51, 54, 55, 57, 63, 78, 80, 87, 89, 92, 97
Classifier 31: 5, 6, 20, 21, 24, 32, 33, 36, 37, 38, 39, 43, 44, 46, 47, 60, 64, 66, 67, 69, 83, 90
Classifier 32: 7, 10, 14, 15, 19, 27, 28, 35, 40, 45, 48, 53, 54, 59, 61, 78, 82, 84, 85, 96, 98, 100
Classifier 33: 2, 5, 6, 9, 10, 14, 26, 28, 31, 42, 43, 56, 60, 62, 63, 74, 80, 81, 90, 95, 98, 99
Classifier 34: 2, 3, 9, 11, 12, 15, 19, 25, 27, 32, 36, 40, 49, 68, 69, 71, 72, 75, 85, 90, 98, 99
Classifier 35: 1, 4, 23, 25, 30, 33, 34, 44, 45, 54, 60, 73, 77, 79, 84, 86, 89, 93, 94, 96, 98, 100
Classifier 36: 2, 4, 10, 13, 20, 22, 28, 34, 37, 38, 44, 45, 50, 58, 67, 69, 73, 81, 87, 91, 92, 94
Classifier 37: 3, 4, 5, 21, 24, 33, 35, 40, 42, 43, 53, 55, 59, 63, 64, 65, 78, 83, 84, 85, 95, 97
Classifier 38: 7, 9, 16, 17, 20, 29, 32, 36, 39, 47, 51, 52, 53, 58, 59, 70, 71, 76, 80, 89, 93, 94
Classifier 39: 5, 10, 12, 14, 19, 23, 26, 33, 41, 44, 56, 57, 59, 60, 62, 69, 72, 75, 91, 92, 95, 99
Classifier 40: 1, 5, 8, 14, 20, 23, 24, 26, 28, 32, 35, 39, 46, 48, 52, 53, 55, 73, 80, 84, 88, 93
[0070] Although this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features of a claimed combination can, in some cases, be excised from the combination, and the claimed combination may be directed to a subcombination or a variation of a subcombination.
[0071] Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown, or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing can be advantageous. Furthermore, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
[0072] Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing can be advantageous.
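The classifier definitions enumerated in Appendix B above each name a 22-element subset of a larger (here at least 100-dimensional) feature vector, which reads naturally as a random-subspace ensemble: each member classifier scores only its assigned feature indices, and the members vote. The sketch below is an illustrative reading, not the patent's prescribed implementation — the linear scoring, the weight bank, and the majority-vote rule are assumptions; only the two index lists (Classifiers 1 and 2 of Feature Vector Set 2) come from the appendix.

```python
import numpy as np

# Feature-index subsets taken from Appendix B (Feature Vector Set 2).
# Indices are 1-based, as listed in the appendix.
SUBSETS = {
    1: [7, 9, 14, 15, 18, 20, 24, 27, 35, 37, 40, 45,
        55, 58, 60, 64, 65, 70, 80, 81, 98, 100],
    2: [1, 2, 5, 8, 15, 17, 20, 22, 23, 28, 29, 30,
        41, 42, 51, 56, 61, 78, 83, 94, 96, 99],
}

def subset_score(features, indices, weights):
    """Score one member classifier on its 22-feature subset.

    An assumed linear decision function; `weights` would come from
    training each member on its own subspace.
    """
    x = features[np.asarray(indices) - 1]  # convert 1-based indices
    return float(np.dot(weights, x))

def ensemble_vote(features, subsets, weight_bank, threshold=0.0):
    """Majority vote over the per-subset scores (assumed fusion rule)."""
    votes = [subset_score(features, idx, weight_bank[k]) > threshold
             for k, idx in subsets.items()]
    return sum(votes) > len(votes) / 2
```

With trained weights substituted for the hypothetical `weight_bank`, `ensemble_vote` returns the ensemble's live/spoof decision for one feature vector; the subspace structure keeps each member cheap and decorrelates their errors.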
Claims (21)
[1] 1. Computer-implemented method comprising: outputting, using an audio output component of a user device (102), one or more audio signals; receiving, using an audio input component of the user device (102), one or more reflections of the audio signals off a target (104); determining, based on the one or more reflections, whether the target (104) comprises at least one of a face-like structure and face-like tissue; receiving a plurality of images of the target (104); identifying, based on the images, whether the target (104) has a first pulse; measuring a second pulse of the target (104) through physical contact with the target (104); and characterized in that it further comprises: determining whether the target (104) is a forgery based at least in part on (i) the determination of whether the target (104) comprises at least one of a face-like structure and face-like tissue, (ii) the identification of whether the target (104) has the first pulse, and (iii) the measurement of the second pulse.
[2] 2. Method according to claim 1, characterized in that the user device is a mobile device (102) comprising a smartphone, a tablet, or a laptop.
[3] 3. Method according to claim 1, characterized in that the one or more audio signals comprise short coded pulse pings, short-term chirps, or CTFM pings.
[4] 4. Method according to claim 1, characterized in that it further comprises: training a classifier to identify physical features of the target (104); and providing, as input to the classifier, information based on the one or more reflections of the audio signals off the target (104), wherein the determination of whether the target (104) is a forgery is further based, at least in part, on a classifier output received in response to the provided input.
[5] 5. Method according to claim 1, characterized in that it further comprises randomizing one or more characteristics of the one or more audio signals.
[6] 6. 
Method according to claim 1, characterized in that it further comprises: receiving a plurality of images of the target (104); and determining, based on reflections of light detected in the images, whether the target (104) comprises a three-dimensional face-like structure.
[7] 7. Method according to claim 1, characterized in that the first pulse is identified using remote photoplethysmography.
[8] 8. Method according to claim 1, characterized in that it further comprises determining whether the second pulse correlates with the first pulse, wherein the determination of whether the target (104) is a forgery is further based, at least in part, on the correlation.
[9] 9. Method according to claim 1, characterized in that measuring the second pulse comprises receiving information associated with the second pulse from the user device (102) or another portable or wearable device.
[10] 10. Method according to claim 9, characterized in that the information associated with the second pulse comprises a ballistocardiographic signal.
[11] 11. 
System comprising: at least one memory; and at least one processing unit for executing instructions stored in the memory, wherein execution of the instructions causes the processing unit to perform operations comprising: outputting, using an audio output component of a user device (102), one or more audio signals; receiving, using an audio input component of the user device (102), one or more reflections of the audio signals off a target (104); determining, based on the one or more reflections, whether the target (104) comprises at least one of a face-like structure and face-like tissue; receiving a plurality of images of the target (104); identifying, based on the images, whether the target (104) has a first pulse; measuring a second pulse of the target (104) through physical contact with the target (104); and characterized in that the processing unit further performs an operation comprising: determining whether the target (104) is a forgery based at least in part on (i) the determination of whether the target (104) comprises at least one of a face-like structure and face-like tissue, (ii) the identification of whether the target (104) has the first pulse, and (iii) the measurement of the second pulse.
[12] 12. System according to claim 11, characterized in that the user device is a mobile device (102) comprising a smartphone, a tablet, or a laptop.
[13] 13. System according to claim 11, characterized in that the one or more audio signals comprise short coded pulse pings, short-term chirps, or CTFM pings.
[14] 14. 
System according to claim 11, characterized in that the operations further comprise: training a classifier to identify physical features of the target (104); and providing, as input to the classifier, information based on the one or more reflections of the audio signals off the target (104), wherein the determination of whether the target (104) is a forgery is further based, at least in part, on a classifier output received in response to the provided input.
[15] 15. System according to claim 11, characterized in that the operations further comprise randomizing one or more characteristics of the one or more audio signals.
[16] 16. System according to claim 11, characterized in that the operations further comprise: receiving a plurality of images of the target (104); and determining, based on reflections of light detected in the images, whether the target (104) comprises a three-dimensional face-like structure.
[17] 17. System according to claim 11, characterized in that the first pulse is identified using remote photoplethysmography.
[18] 18. System according to claim 11, characterized in that the operations further comprise determining whether the second pulse correlates with the first pulse, wherein the determination of whether the target (104) is a forgery is further based, at least in part, on the correlation.
[19] 19. System according to claim 11, characterized in that measuring the second pulse comprises receiving information associated with the second pulse from the user device (102) or another portable or wearable device.
[20] 20. System according to claim 19, characterized in that the information associated with the second pulse comprises a ballistocardiographic signal.
[21] 21. 
Non-transitory computer-readable medium storing instructions that, when executed, cause a processor to perform operations comprising: outputting, using an audio output component of a user device (102), one or more audio signals; receiving, using an audio input component of the user device (102), one or more reflections of the audio signals off a target (104); determining, based on the one or more reflections, whether the target (104) comprises at least one of a face-like structure and face-like tissue; receiving a plurality of images of the target (104); identifying, based on the images, whether the target (104) has a first pulse; measuring a second pulse of the target (104) through physical contact with the target (104); and characterized in that the processor further performs an operation comprising: determining whether the target (104) is a forgery based, at least in part, on (i) the determination of whether the target (104) comprises at least one of a face-like structure and face-like tissue, (ii) the identification of whether the target (104) has the first pulse, and (iii) the measurement of the second pulse.
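The decision recited in claims 1, 11, and 21 fuses three liveness cues: an echo-based facial-structure check, an image-derived (rPPG) first pulse, and a contact-measured second pulse, with claim 8 adding a correlation between the two pulses. A minimal illustrative sketch follows; the function names, the Pearson-correlation test, the 0.7 threshold, and the all-cues-must-agree rule are assumptions, since the claims only require the decision to be "based at least in part on" these signals.

```python
import numpy as np

def pulses_correlate(rppg_signal, contact_signal, min_corr=0.7):
    """Compare the image-derived pulse with the contact-measured pulse.

    Pearson correlation with an assumed threshold; a live subject's two
    pulse signals should track each other, a replayed face's will not.
    """
    r = np.corrcoef(rppg_signal, contact_signal)[0, 1]
    return r >= min_corr

def is_spoof(echo_says_face, has_first_pulse, rppg_signal, contact_signal):
    """Judge the target a forgery unless every liveness cue agrees
    (one plausible fusion rule; the claims leave the rule open)."""
    alive = (echo_says_face                 # acoustic 3-D face/tissue check
             and has_first_pulse            # rPPG pulse found in the images
             and pulses_correlate(rppg_signal, contact_signal))
    return not alive
```

Failing any one cue, a flat photo with no acoustic depth, a video with no matching contact pulse, flags the target as a spoof under this reading.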
Family patents:
Publication number | Publication date: TWI695182B|2020-06-01| RU2020106758A|2020-02-28| PH12017502353A1|2018-06-25| CN110110591A|2019-08-09| RU2018101202A|2019-07-16| US20170357868A1|2017-12-14| TW201716798A|2017-05-16| JP2018524072A|2018-08-30| RU2018101202A3|2019-09-27| US20160371555A1|2016-12-22| RU2715521C2|2020-02-28| EP3311330A1|2018-04-25| CN107851182B|2019-04-19| AU2016278859B2|2019-07-18| WO2016204968A1|2016-12-22| CA2989575C|2020-03-24| RU2020106758A3|2020-05-18| HK1251691B|2020-02-21| US9665784B2|2017-05-30| CA2989575A1|2016-12-22| KR20190143498A|2019-12-30| KR20180019654A|2018-02-26| KR102131103B1|2020-07-07| SG10201906574VA|2019-09-27| AU2016278859A1|2018-01-18| CN107851182A|2018-03-27| RU2725413C2|2020-07-02| KR102061434B1|2019-12-31| MX2017016718A|2018-11-09| CN110110591B|2021-01-15| US10108871B2|2018-10-23| JP6735778B2|2020-08-05|
Legal status:
2019-12-10 | B06A | Patent application procedure suspended [chapter 6.1 patent gazette]
2020-03-24 | B09A | Decision: intention to grant [chapter 9.1 patent gazette]
2020-05-12 | B16A | Patent or certificate of addition of invention granted [chapter 16.1 patent gazette] | Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 31/05/2016, SUBJECT TO THE LEGAL CONDITIONS.
Priority:
Application number | Filing date | Patent title
US201562180481P | 2015-06-16 |
US62/180,481 | 2015-06-16 |
PCT/US2016/035007 (WO2016204968A1) | 2015-06-16 | 2016-05-31 | Systems and methods for spoof detection and liveness analysis